Calculus, all content (2017 edition)
- Taylor & Maclaurin polynomials intro (part 1)
- Taylor & Maclaurin polynomials intro (part 2)
- Worked example: Maclaurin polynomial
- Worked example: coefficient in Maclaurin polynomial
- Worked example: coefficient in Taylor polynomial
- Taylor & Maclaurin polynomials
- Taylor polynomial remainder (part 1)
- Taylor polynomial remainder (part 2)
- Worked example: estimating sin(0.4) using Lagrange error bound
- Worked example: estimating eˣ using Lagrange error bound
- Lagrange error bound
- Visualizing Taylor polynomial approximations
- Worked example: Taylor polynomial of derivative function
Approximating eˣ with a Taylor polynomial centered at x=3. In the video we find the first few terms of such a polynomial and graph it to see how close it gets to eˣ. Created by Sal Khan.
- I would like to know why the Taylor series is a better approximation. Both values chosen are points on the function, so why is any other point better than x=0?
Thank you.(11 votes)
- As a whole, the Taylor series does not approximate a function better per se. It simply does a better job approximating the function near a particular point.
Say I wanted to approximate a function at x=1000 or x=1,000,000 or some other huge number. If I take a Maclaurin expansion (i.e. a Taylor expansion at x = 0), then odds are my approximated function won't look anything like the actual function at the huge number I'm interested in. So what I do is take a Taylor approximation of that function right at that huge number. Now the curve of my approximated function is closest to the curve of the actual function at that huge number I'm interested in.
Similarly, if I am interested in approximating a function at x=0, it would not make any sense to use a Taylor approximation centered at, say, 1,000,000. In this case, a Maclaurin approximation would actually be better than a Taylor approximation taken at some other number.(37 votes)
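This answer is easy to check numerically. Below is a minimal Python sketch (the function and variable names are my own, not from the video): a degree-5 Maclaurin polynomial of e^x evaluated far from 0 misses badly, while a degree-5 Taylor polynomial centered at the point of interest is essentially exact there.

```python
import math

def taylor_exp(x, c, n):
    """Degree-n Taylor polynomial of e^x centered at c.
    Every derivative of e^x is e^x, so each coefficient is e^c / k!."""
    return sum(math.exp(c) * (x - c) ** k / math.factorial(k) for k in range(n + 1))

x = 10.0
true_value = math.exp(x)
maclaurin = taylor_exp(x, c=0, n=5)    # centered far from x: large error
centered = taylor_exp(x, c=10, n=5)    # centered at x: exact at the center
print(abs(true_value - maclaurin))     # error in the tens of thousands
print(abs(true_value - centered))      # essentially zero
```

At the center itself every (x-c) term vanishes, so the polynomial reproduces e^c exactly; the quality only degrades as you move away from c.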
- Why would you ever want a polynomial approximation of a function when you know the function?(11 votes)
- Thanks for the response!
The reasons involving the first or second derivative make sense. I don't know why sine or cosine would need to be calculated by computers using polynomial approximations, but I'll take your word for it; I don't know anything about programming. I'll have to check out your statistics answer, because I have studied statistics but haven't run across this, and it looks like something interesting and new to learn. I'd like to at least understand the intuition of it, and that isn't clear at all right now.(5 votes)
- What is the integration of e^(x^2)?(5 votes)
- It is extremely difficult to integrate e^(x^2). At least, BC Calc does not deal with anything like that. But anyway, the answer is ∫ e^(x^2) dx = (√π / 2) · erfi(x) + C, where erfi is the imaginary error function.(5 votes)
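One way to check this answer without a special-function library: integrate the Maclaurin series of e^(t^2) term by term, which gives exactly (√π/2)·erfi(x), and compare it against a plain midpoint Riemann sum. A sketch; the names here are my own:

```python
import math

# e^(t^2) = sum of t^(2n)/n!, so integrating term by term from 0 to x gives
# sum of x^(2n+1) / (n! * (2n+1)), which equals (sqrt(pi)/2) * erfi(x).
def int_exp_sq(x, terms=30):
    return sum(x ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
               for n in range(terms))

# Cross-check with a simple midpoint Riemann sum of the same integral.
def midpoint(f, a, b, n=100_000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

print(int_exp_sq(1.0))                              # ≈ 1.46265
print(midpoint(lambda t: math.exp(t * t), 0, 1))    # ≈ 1.46265
```

Both methods agree to many decimal places, which is a good sanity check that the series form really is the antiderivative.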
- So, the bigger the number of terms in the Taylor polynomial, the greater the precision of my approximation.
This seems to mean that I can approximate any non-polynomial function with very low error, provided that I have a powerful enough calculator.
This is clearly too good to be true. Are there some functions that can't be approximated (to an arbitrarily high degree of precision) with this technique?(2 votes)
- Most functions (in a very precise sense of 'most') cannot be approximated by Taylor polynomials. Firstly, remember that we construct Taylor polynomials by taking repeated derivatives, so to have an infinite Taylor series, a function must be differentiable infinitely many times.
Even if a function is infinitely differentiable, its Taylor series may not converge to the function. (The example given on Wikipedia is the function f(x)=e^(-1/x) when x>0, and f(x)=0 otherwise. If we try to construct a Taylor series at x=0, we just get the 0 function.) So the property of having a usable Taylor series is actually a very restrictive and rare one in the grand scheme of things. We're just used to working with functions that are hand-picked to have a lot of nice properties like this.(8 votes)
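The Wikipedia example mentioned above can be seen numerically. In this Python sketch (the naming is my own), f(x) shrinks toward 0 faster than any power of x as x approaches 0 from the right, which is why every derivative at 0 is 0 and the Maclaurin series collapses to the zero function:

```python
import math

def f(x):
    # Infinitely differentiable everywhere, yet every derivative at 0 is 0,
    # so its Maclaurin series is identically zero.
    return math.exp(-1.0 / x) if x > 0 else 0.0

# Near 0, f(x) is eventually dwarfed by x^10 (or any power of x):
for x in [0.1, 0.05, 0.01]:
    print(x, f(x), f(x) / x ** 10)
```

The ratio f(x)/x^10 eventually plunges toward 0, so the zero polynomial matches f and all its derivatives at 0 even though f is not the zero function.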
- What is e? Is there a video that explains that?(0 votes)
- e is a transcendental and irrational number that pops up repeatedly in math and science. e is defined as the limit as n approaches infinity of (1 + 1/n)^(n). e is the base of the natural logarithm (ln).
The first few digits of e are 2.718281828459(8 votes)
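That limit definition is easy to check numerically. A short sketch comparing the limit of (1 + 1/n)^n with the Maclaurin series of e^x evaluated at x = 1 (names are my own):

```python
import math

# e as the limit of (1 + 1/n)^n as n grows:
for n in [10, 1000, 100_000]:
    print(n, (1 + 1 / n) ** n)   # creeps up toward 2.71828...

# The same number from the Maclaurin series of e^x at x = 1:
e_series = sum(1 / math.factorial(k) for k in range(20))
print(e_series)                  # ≈ 2.718281828459045
```

The series converges far faster than the limit: 20 terms already agree with math.e to full double precision, while n = 100,000 in the limit form still only gets about four decimal places.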
- If the function is e^x and x is 3, how come you still write "(x-3)"? Wouldn't that be 0?(3 votes)
- Sal used the same variable name x with two different meanings here. When Sal writes x=3, he means the Taylor polynomial is centered around the value 3. You could call that c = 3, or any other name you fancy.
When he writes the Taylor polynomial, the x in (x-3) is not a constant but a variable. For the specific case where this x=3, we get P(3) = e^3 + e^3 * (3-3) + (e^3 / 2!) * (3-3)^2 + … = e^3, but this is only the specific case where we choose x = 3.(2 votes)
- OK, in cases when we make real approximations, I know the x in P(x) stands for the argument, but what is it actually? For example, if I have to write the e^x expansion to the third or fourth derivative, what do I write in the x-c part? 0? Then the whole series basically becomes zero.(3 votes)
- P(x) is a function of x. Therefore, x is variable while c is a constant.
For example, let's say I want to approximate e^x around the input c. Now I get this polynomial.
P(x) = e^c + e^c * (x-c) + (e^c / 2!) * (x-c)^2 + …
Now, c is a constant. If I wanted to find values close to 10, I'd set c=10. If I wanted values close to 3, I'd set c=3.
This would give me the function P(x) = e^3 + e^3 * (x-3) + (e^3 / 2!) * (x-3)^2 + …
Now, as for x, it changes depending on what you want to find. If I want to know e^3.123, I can say I'm happy enough with c=3, and my x is 3.123. Here x-c would be 0.123, just as x changes in any other function, like y = 7*x + 42.(1 vote)
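The answer above can be sketched in a few lines of Python (P here is a hypothetical helper mirroring the answer's polynomial, with the factorials written out):

```python
import math

def P(x, c=3, n=4):
    """Degree-n Taylor polynomial of e^x centered at c.
    Every derivative of e^x evaluated at c is e^c."""
    return sum(math.exp(c) * (x - c) ** k / math.factorial(k) for k in range(n + 1))

print(P(3.123))         # close to e^3.123, since x - c = 0.123 is small
print(math.exp(3.123))  # the true value, for comparison
print(P(3))             # exactly e^3: at the center, every (x-c) term vanishes
```

Plugging in x = c gives back f(c) exactly; plugging in nearby x values like 3.123 gives an approximation whose error grows with the distance |x - c|.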
- My question is: does the value of P(x) differ as we consider different c's (about which the expansion is centered), if we take the same value of x for the different c's?
The way I think of it, points near the chosen c get a better approximation than those far away from c. That's how I interpreted the graph.
Can you please correct me if I'm wrong?(2 votes)
- Not really related to the matter in the video, but is there a way to find a function given its Taylor series? If you have its Taylor series in sigma notation, like the sum from n=0 to infinity of (x^n)/n!, can you somehow find out that it is the e^x function (and do that for every Taylor series you can get)?
I tried thinking about discovering a formula for its nth derivative at 0, which, for example, would be f^(n)(0) = 1 for f(x) = e^x. To find this formula I can just multiply what is inside the sigma-notation Taylor series by n!/(x^n), and that gives me the nth derivative (at zero in this case) of the function. But this didn't help much; I couldn't find a way to recover a function from knowing every one of its derivatives at a given point.(2 votes)
- Interesting question. I'll have to refer you to another page, but I think the method there could work for many series like this; essentially, the writer transforms the series into a differential equation using some algebra and our known Taylor Series representations. It's pretty clever. Look at g_edgar's answers: https://www.physicsforums.com/threads/find-the-function-for-this-taylor-series.324948/ .
As far as how he could use the series for e^x in his proof -- I'm not sure there's a good way to simply look at a Taylor Series and change it into a function (which is what you seem to be asking), but if we can use the series of some simple functions that we already know, then we can find functions to represent much more complex Taylor Series. So a little bit of work on creating known Taylor representations gives us a lot of flexibility.
Hope that's what you were looking for.(3 votes)
- Why did Sal and WolframAlpha plot e^x at x=0 as y=0, if e^0 is 1? I don't want to be annoying; I'm just worried that I've missed something.(2 votes)
- I can't comment on the WolframAlpha graph, but in the graph in this video, at x=0 it looks to me like y is a small positive value. The graph is drawn at a big-picture scale, so values near 1 are hard to distinguish from 0.
Let's say we've got the function f of x is equal to e to the x. And just to get a sense of what that looks like, let me do a rough drawing of f of x is equal to e to the x. It would look something like this. So that is e to the x. And what I want to do is I want to approximate f of x is equal to e to the x using a Taylor series approximation, or a Taylor series expansion. And I want to do it not around x is equal to 0. I want to do it around x is equal to 3, just to pick another arbitrary value. So we're going to do it around x is equal to 3. This is x is equal to 3. This right there. That is f of 3. f of 3 is e to the third power. So this is e to the third power right over there. So when we take the Taylor series expansion, if we have a 0 degree polynomial approximating it, the best we could probably do is have a constant function going straight through e to the third. If we do a first order approximation, so we have a first degree term, then it will be the tangent line. And as we add more and more degrees to it, we should hopefully be able to kind of contour or converge with the curve better and better and better. And in the future, we'll talk a little bit more about how we can test for convergences and how well are we converging and all that type of thing. But with that said, let's just apply the formula that hopefully we got the intuition for in the last video. So the Taylor series expansion for f of x is equal to e to the x will be the polynomial. So what's f of c? Well, if x is equal to 3, we're saying that c is 3 in this situation. So if c is 3, f of 3 is e to the third power. So it's e to the third power plus-- what's f prime of c? Well f prime of x is also going to be e to the x. You take the derivative of e to the x, you get e to the x. That's one of the super cool things about e to the x. So this is also f prime of x. Frankly, this is the same thing as f the nth derivative of x. You could just keep taking the derivative of this and you'll get e to the x. 
So f prime of x is e to the x. You evaluate that at 3, you get e to the third power again times x minus 3, c is 3, plus the second derivative our function is still e to the x, evaluate that at 3, you get e to the third power over 2 factorial times x minus 3 to the second power. And then we could keep going. The third derivative is still e to the x. Evaluate that at 3. c is 3 in this situation. So you get e to the third power over 3 factorial times x minus 3 to the third power. And we can keep going with this, but I think you get the general idea. But what's even more interesting than just kind of going through the mechanics of finding the expansion, is seeing how as we add more and more terms, it starts to approximate e to the x better and better and better. And our approximation gets good further and further away from x is equal to 3. And to do that, I used WolframAlpha, available at wolframalpha.com. And I think I typed in Taylor series expansion e to the x and x equals 3. And it just knew what I wanted and gave me all of this business right over here. And it actually calculated the expansion. And you can see it's the exact same thing that we have over here, e to the third plus e to the third times x minus 3. We have e to the third plus e to the third times x minus 3 plus 1/2. They actually expanded out the factorial. So instead of 3 factorial, they wrote a 6 over here. And they did a bunch of terms up here. But what's even more interesting is that they actually graph each of these polynomials with more and more terms. So in orange, we have e to the x. We have f of x is equal to e to the x. And then they tell us, "order n approximation shown with n dots." So the order one approximation, so that should be the situation where we have a first degree polynomial, so that's literally-- a first degree polynomial would be these two terms right over here. Because this is a 0-th degree, this is a first degree. We just have x to the first power involved here. 
If we just were to plot this-- if this was our polynomial, that is plotted with 1 dot. And that is this one right over here, with one dot, and they plot it right over here. And we can see that it's just a tangent line at x is equal to 3. That is x is equal to 3 right over there. And so this is the tangent line. If we add a term, now we're getting to a second degree polynomial, because we're adding an x squared. If you expand this out, you'll have an x squared term, and then you'll have another x term, but the degree of the polynomial will now be a second degree. So let's look for two dots. So that's this one right over here. So let's see, two dots. Two dots coming in. See, you'll notice one, two dots. So you have two dots, and it comes in. And this is a parabola. It's a second degree polynomial, and then it comes back like this. But notice it does a better job, especially around x equals 3, of approximating e to the x. It stays with the curve a little bit longer. You add another term-- let me do this in a new color, a color that I have not used. You add another term. Now you have a third degree polynomial. If you have all of these combined, if this is your polynomial, and you were to graph that-- and so let's look for the three dots right over here. So one, two, three. So it's this curve. Third degree polynomial is this curve right over here. And notice, it starts contouring e to the x a little bit sooner than the second degree version. And it stays with it a little bit longer. And so you have it just like that. You add another term to it, you add the fourth degree term to it. So now we have all of this plus all of this. If this is your polynomial, now you have this curve right over here. Notice every time you add a term, it's getting better and better at approximating e to the x further and further away from x is equal to 3. And then if you add another term, you get this one up here. 
But hopefully that satisfies you, that we are getting closer and closer, the more terms we add. So you can imagine it's a pretty darn good approximation as we approach adding an infinite number of terms.
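The behavior the video describes, where adding terms makes the approximation good further and further from x = 3, can be checked numerically. A minimal sketch (the names are my own, not from the video), measuring the error at x = 5, two units from the center:

```python
import math

def taylor_e(x, c=3.0, n=1):
    """Degree-n Taylor polynomial of e^x centered at c."""
    return sum(math.exp(c) * (x - c) ** k / math.factorial(k) for k in range(n + 1))

# The error at x = 5 shrinks steadily as the degree grows,
# mirroring the successive curves on the WolframAlpha plot:
for n in [1, 2, 3, 4, 8]:
    print(n, abs(math.exp(5) - taylor_e(5, n=n)))
```

The degree-1 polynomial (the tangent line) is off by a large amount at x = 5, while the degree-8 polynomial is already accurate to a few hundredths, matching the way each added curve in the video hugs e^x for longer.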