
# Taylor & Maclaurin polynomials intro (part 2)

Taylor & Maclaurin polynomials are a very clever way of approximating a function with a polynomial. In this video we come up with the general formula for the nth term in a Taylor polynomial. Created by Sal Khan.

## Want to join the conversation?

• I am confused why Sal is only left with f'(c) after he expands f'(c)(x-c). You get f'(c)x - f'(c)c, but from there I just don't get it.
• You're taking the derivative with respect to x. Here, x is the variable; c and f'(c) are constants.

If you expand f'(c)(x - c), you get
f'(c)x - f'(c)c
The second term is a product of two constants, so its derivative is zero.
The first term is (some constant) * x, so its derivative is that constant, which is f'(c).

If f'(c) gets in your way, you can rewrite it to remember it is a constant. For example, let
a = f'(c)
Then
d/dx ( ax ) = a
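
If it helps to see this mechanically, here is a minimal sympy sketch (my own, not from the video), with a generic constant a standing in for f'(c):

```python
import sympy as sp

x, c, a = sp.symbols('x c a')   # a stands in for the constant f'(c)

expr = a * (x - c)              # f'(c)(x - c), with a = f'(c)
print(sp.expand(expr))          # a*x - a*c
print(sp.diff(expr, x))         # a, i.e. f'(c): the second term vanished
```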
• I don't understand the point of a Taylor expansion at c, since the Maclaurin series already does the job of expanding at zero. What does a Taylor series add, or is it just generalizing the expansion so we are not confined to the point zero?
• Sometimes it's useful to be able to evaluate at some other point than zero, especially if the point we are actually interested in is far away from zero.

Suppose we wanted to find the value of sin(100). If we used a zero-centered Taylor series, we would have to calculate a great many terms to get an accurate answer; a series centered near 100 gets there with just a few.
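
For a concrete feel (my own rough sketch, not from the video), compare a 20-term polynomial centered at 0 with a 4-term polynomial centered at c = 32π ≈ 100.53, a point close to 100 where sin and cos are exactly 0 and 1:

```python
import math

def sin_taylor(x, c, n_terms):
    """Taylor polynomial of sin about c, summing n_terms terms.
    The k-th derivative of sin at c cycles through
    sin(c), cos(c), -sin(c), -cos(c)."""
    derivs = [math.sin(c), math.cos(c), -math.sin(c), -math.cos(c)]
    return sum(derivs[k % 4] * (x - c) ** k / math.factorial(k)
               for k in range(n_terms))

x = 100.0
print(math.sin(x))                      # true value, about -0.5064
print(sin_taylor(x, 0.0, 20))           # centered at 0: wildly off
print(sin_taylor(x, 32 * math.pi, 4))   # centered near 100: about -0.5060
```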
• Hi, first let me say thank you in advance for reviewing my questions. Second, it should be noted that I am writing this because I am confused, so I realize that my lack of understanding may cause me to mischaracterize some things. Also, expressing in writing where I am lost with this topic is not trivial for me.

I have yet to connect all of the parts of the Taylor Series into a sensible story when it comes to the Taylor Series for functions like Sine. My understanding is that a Taylor Series expansion can actually be equivalent to the Sine function (I am aware that not all Taylor expansions equal the function in question). I get that the series starts with identifying a point from which to expand the series, and having the derivatives of the Sine function (in this case) and its Taylor series expansion match at this point. The process then (at a high level) uses the derivatives at this single point as one of several components that make up the terms in the Taylor series (other components include increasing polynomial degrees that purposefully map to the derivatives of the chosen expansion point). These terms are added together, and the more we add, the better the approximation to Sine, not just at that single point but for all the points that one can input for Sine.

So what confuses me?

How is it that matching the Taylor Series and Sine derivatives at a single point enables us to take the individual results of the Taylor Series version and place them collectively into a summation of terms that correctly maps all of the inputs and outputs for Sine? I have been struggling to find a good explanation as to why or how this step in the process works. How does combining the derivatives with the other components of the terms make the Sine Taylor Series work? How can one interpret a process that takes the individual results of derivatives at a single point, attaches them to their appropriate polynomial powers, and then adds them together? This is especially confusing knowing that Sine clearly has a repetitive nature to it. I do see this reflected in the Taylor series expansion by the repeating derivatives in the terms, but how Sine's pattern is enforced by the rest of the components of the terms is not clear. The other components have a pattern of increasing polynomial degrees, as well as the n'th term being divided by the n'th factorial to make the derivatives work out, but what role is this pattern playing in realizing the Sine pattern? Also, why is it that the more you add, the better the approximation to Sine?

Perhaps there is another way to ask the question. What if you did not know how the Sine Taylor Series was constructed? You did not know things like it is made up of terms that allow the derivatives to work out, etc. All you have is a polynomial that is claimed to approximate the Sine function for any input. Then how could one analyze and use the operations, objects, definitions, properties, and patterns associated with this infinite polynomial representation of Sine to conclude that it is indeed equivalent to the Sine function, not just at a single point but at all points?
• The only thing I don't understand is the (x-c) part: why put in the c and not simply use x, x^2, x^3 and so on, like in the Maclaurin series?
• It's a shift. It's like shifting the parabola function, y = x^2, three places to the left. You'd write it as y = (x+3)^2. To shift it c to the right, you'd use (x-c)^2. Or, in the case of the Taylor expansion, multiply the derivatives by powers of (x-c).
• I'm having trouble understanding the difference between a Taylor Series and a Taylor Polynomial. Would anyone please explain how these relate to each other? Is a Taylor Series a string of a certain number of Taylor Polynomials?
Thank you!
• So are you saying that as n approaches infinity, the Taylor polynomial is equal to the Taylor series? In other words, the Taylor polynomial can have any number of terms depending on the order we choose, whereas the Taylor series has an infinite number of terms.
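
A minimal Python sketch of exactly that distinction (my own, using e^x since all its derivatives at 0 equal 1): each Taylor polynomial is a finite partial sum of the series, and raising the degree moves it toward the value of the full series.

```python
import math

def exp_taylor_poly(x, n):
    """Degree-n Maclaurin polynomial of e^x: the n-th partial sum
    of the Taylor series, sum of x^k / k! for k = 0..n."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

for n in [2, 5, 10]:
    print(n, exp_taylor_poly(1.0, n))   # 2.5, 2.71666..., 2.71828...
print(math.e)                           # the series' value at x = 1
```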
• Why exactly does adding more derivatives make the function approximation better? I know that the 2nd derivative involves concavity, inflection points, etc. But I just don't understand its connection to function approximation?
• You are trying to match two functions as closely as possible. Think of it this way:
There are a vast number of functions that have both the same f(0) and the same slope at that point. There are fewer that have the same original value, slope, and concavity. Match the third derivative and you have eliminated even more functions. Keep matching subsequent derivatives and you will systematically keep eliminating functions that don't match. If you could take infinitely many derivatives, then you would eliminate all other functions and you would be left only with functions that are equal to each other in infinitely many ways; in other words, they would be identical.
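
To see that elimination pay off numerically, here is a small sketch (my own, not from the video): Maclaurin polynomials of sin of increasing degree, with the worst error over [-2, 2] shrinking every time more derivatives are matched.

```python
import math

def sin_maclaurin(x, degree):
    """Maclaurin polynomial of sin up to the given (odd) degree:
    x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range((degree + 1) // 2))

xs = [i / 10 for i in range(-20, 21)]   # sample points on [-2, 2]
for degree in [1, 3, 5, 7]:
    worst = max(abs(math.sin(x) - sin_maclaurin(x, degree)) for x in xs)
    print(degree, worst)   # roughly 1.09, 0.24, 0.024, 0.0015
```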
• Why x - c in the approximation?
• Do you remember how a parabola f(x) = x^2 can be translated? f(x) = x^2 is a parabola opening up, centered at 0. A new function g(x) = (x - 2)^2 is the same shape, just moved over 2 units to the right on the x-axis. The same idea applies in this video. Instead of approximating the function at 0, we approximate the function at a new point x = c. Hope that helps.
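
A tiny numeric check of that shift (my own sketch): evaluating g two units to the right of x reproduces f at x.

```python
def f(x):
    return x ** 2            # parabola centered at 0

def g(x):
    return (x - 2) ** 2      # the same parabola, shifted 2 to the right

for x in [-1.0, 0.0, 3.5]:
    assert g(x + 2) == f(x)  # g at (x + 2) matches f at x
print("g is f shifted 2 units to the right")
```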
• Why would anyone use it and mess with (x - c)^n when we can just use Maclaurin?