## AP®︎/College Calculus BC

Lesson 11: Finding Taylor polynomial approximations of functions

# Visualizing Taylor polynomial approximations

AP.CALC: LIM‑8 (EU), LIM‑8.A (LO), LIM‑8.A.1 (EK), LIM‑8.A.2 (EK), LIM‑8.B (LO), LIM‑8.B.1 (EK)
Approximating eˣ with a Taylor polynomial centered at x=3. In the video we find the first few terms of such a polynomial and graph it to see how close it gets to eˣ. Created by Sal Khan.
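The video's construction can be sketched numerically. Below is a minimal Python sketch (the function name `taylor_exp` and the choice of degree are my own, not from the video): since every derivative of e^x is e^x, each coefficient of the Taylor polynomial centered at c=3 is e^3 / k!.

```python
import math

def taylor_exp(x, c=3.0, degree=4):
    """Degree-n Taylor polynomial of e^x centered at c.

    Every derivative of e^x is e^x, so the k-th coefficient is e^c / k!.
    """
    return sum(math.exp(c) * (x - c) ** k / math.factorial(k)
               for k in range(degree + 1))

# Near the center the polynomial hugs e^x; far away it drifts off.
print(taylor_exp(3.0), math.exp(3.0))  # exact at the center
print(taylor_exp(3.5), math.exp(3.5))
print(taylor_exp(6.0), math.exp(6.0))
```

Raising `degree` tightens the approximation over a wider interval around x=3, which is exactly what the graphs in the video show.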

## Want to join the conversation?

• I would like to know why the Taylor series is a better approximation. Both values chosen are points on the function, so why is any other point better than x=0?
Thank you. •  As a whole, the Taylor series does not approximate a function better per se. It simply does a better job of approximating the function near a particular point.

Say I wanted to approximate a function at x=1000, or x=1,000,000, or some other huge number. If I take a Maclaurin expansion (i.e., a Taylor expansion at x=0), then odds are my approximated function won't look anything like the actual function at the huge number I'm interested in. So instead I take a Taylor approximation of the function right at that huge number. Now the curve of my approximation is closest to the curve of the actual function at the number I care about.

Similarly, if I am interested in approximating a function at x=0, it would not make any sense to use a Taylor approximation centered at, say, 1,000,000. In this case, a Maclaurin approximation would actually be better than a Taylor approximation taken at some other number.
• Why would you ever want a polynomial approximation of a function when you already know the function? • Thanks for the response!
The reasons involving first and second derivatives make sense. I don't know why sine or cosine would need to be calculated by computers using polynomial approximations, but I'll take your word for it; I don't know anything about programming. I'll have to check out your statistics answer, because I have studied statistics but haven't run across this, and it looks like something interesting and new to learn. I'd like to at least understand the intuition of it, and that isn't clear at all right now.
• What is the integration of e^(x^2)? • So, the larger the number of terms in the Taylor polynomial, the greater the precision of my approximation.
This seems to mean that I can approximate any non-polynomial function with very low error, provided that I have a powerful enough calculator.

This seems too good to be true. Are there functions that can't be approximated (to an arbitrarily high degree of precision) with this technique? • Most functions (in a very precise sense of 'most') cannot be approximated by Taylor polynomials. First, remember that we construct Taylor polynomials by taking repeated derivatives, so to have an infinite Taylor series, a function must be differentiable infinitely many times.

Even if a function is infinitely differentiable, its Taylor series may not converge to the function. (The example given on Wikipedia is the function f(x) = e^(-1/x) when x > 0, and f(x) = 0 otherwise. If we try to construct a Taylor polynomial at x=0, we just get the zero function.) So having a usable Taylor series is actually a very restrictive and rare property in the grand scheme of things. We're just used to working with functions that are hand-picked to have nice properties like this.
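The flat function described above is easy to probe numerically. This is a small illustrative sketch (not from the thread): the function is smooth everywhere, yet it vanishes so fast near 0 that every finite-difference estimate of its derivatives at 0 comes out as 0, which is why its Taylor series at 0 is identically zero.

```python
import math

def f(x):
    # Smooth but non-analytic at 0: all of its derivatives at 0 are 0.
    return math.exp(-1.0 / x) if x > 0 else 0.0

# The Taylor series of f at 0 is the zero function, yet f is not zero:
for x in [0.1, 0.5, 1.0]:
    print(x, f(x))  # nonzero values that the Taylor series misses entirely

# A forward difference at 0 hints at why: f decays faster than any power.
h = 1e-3
print((f(h) - f(0)) / h)  # effectively 0
```

So near x=0 the zero polynomial matches f to every polynomial order, and the Taylor machinery cannot see the function at all.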
• What is e? Is there a video that explains that? • If the function is e^x and x is 3, how come you still write "(x-3)"? Wouldn't it be 0? • Sal used the same variable name x in two different ways here. When Sal writes x=3, he means the Taylor polynomial is centered around the value 3. You could call that center c=3, or by any other name you fancy.
When he writes the Taylor polynomial, the x in (x-3) is not a constant but a variable. For the specific case where this x=3, we get P(3) = e^3 + e^3 * (3-3) + (e^3 / 2!) * (3-3)^2 + … = e^3, but this is only the specific case where we choose x = 3.
• Ok, in cases where we make real approximations, I know the x in P(x) stands for the argument, but what is it actually? For example, if I have to write the e^x expansion to the third or fourth derivative, what do I write in the x-c part? 0? Then the whole series basically becomes zero. • P(x) is a function of x. Therefore, x is a variable while c is a constant.
For example, let's say I want to approximate e^x around the input c. Then I get this polynomial:
P(x) = e^c + e^c * (x-c) + (e^c / 2!) * (x-c)^2 + …
Now, c is a constant. If I wanted to find values close to 10, I'd set c=10. If I wanted values close to 3, I'd set c=3.
This would give me the function P(x) = e^3 + e^3 * (x-3) + (e^3 / 2!) * (x-3)^2 + …
As for x, it changes depending on what you want to find. If I want to know e^3.123, I can say I'm happy enough with c=3, and my x is 3.123. Here x-c would be 0.123. x varies just as it does in any other function, like y = 7x + 42.
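The e^3.123 example in that answer can be checked directly. A minimal sketch (variable names are mine): center at c=3, evaluate the first few terms at x=3.123, and compare against the true value.

```python
import math

c = 3.0    # the center, chosen close to the input we care about
x = 3.123  # the input; x - c = 0.123 is small, so few terms suffice
terms = [math.exp(c) * (x - c) ** k / math.factorial(k) for k in range(4)]
approx = sum(terms)
print(approx, math.exp(x))  # the two values agree to several decimal places
```

Because |x - c| is small, each successive term shrinks rapidly, and even a cubic polynomial lands within about 10^-3 of e^3.123.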
• My question is: does the value of P(x) differ as we consider different values of c (about which the expansion is centered), if we take the same value of x for different c's?

The way I think of it is that points near the chosen c have a better approximation than those far away from c. This is how I interpreted the graph.
Can you please correct me if I'm wrong? • Not really related to the matter in the video, but is there a way to find a function given its Taylor series? If you have its Taylor series in sigma notation, like the sum from n=0 to infinity of (x^n)/n!, can you somehow find out that it is the function e^x (and do that for every Taylor series you can get)?

I tried thinking about discovering a formula for its nth derivative at 0, which, for example, would be f^(n)(0) = 1 for f(x) = e^x. To find this formula I can just multiply what is inside the sigma-notation Taylor series by n!/(x^n), and that gives me the nth derivative (at zero in this case) of the function. But this didn't help much; I couldn't find a way to identify a function from knowing every one of its derivatives at a given point. • Interesting question. I'll have to refer you to another page, but I think the method there could work for many series like this; essentially, the writer transforms the series into a differential equation using some algebra and our known Taylor series representations. It's pretty clever. Look at g_edgar's answers: https://www.physicsforums.com/threads/find-the-function-for-this-taylor-series.324948/ .

As far as how he could use the series for e^x in his proof -- I'm not sure there's a good way to simply look at a Taylor Series and change it into a function (which is what you seem to be asking), but if we can use the series of some simple functions that we already know, then we can find functions to represent much more complex Taylor Series. So a little bit of work on creating known Taylor representations gives us a lot of flexibility.

Hope that's what you were looking for. 
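While there's no general recipe for reading a function off its Taylor series, the specific series in the question can at least be verified numerically. A small sketch (the helper name `partial_sum` is mine): the partial sums of Σ x^n / n! converge to e^x for any fixed x.

```python
import math

def partial_sum(x, n_terms):
    """Partial sum of the series sum_{n>=0} x^n / n!."""
    return sum(x ** n / math.factorial(n) for n in range(n_terms))

# The partial sums close in on e^x as we take more terms.
x = 2.0
for n in [5, 10, 20]:
    print(n, partial_sum(x, n), math.exp(x))
```

This doesn't "discover" e^x from the series, but it confirms the identification once you have a candidate function in mind.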