Those Pesky Error terms...
I am teaching the "numerical methods" class for engineers this year, and I've learned many things while preparing the lessons. One of the most fascinating is how one can use the error terms of a series (say, a Taylor series) to form a weighted average of an approximating scheme that improves the accuracy of the approximation! Here is but one example: suppose one wants to approximate the derivative of a function, and one has only the function itself to work with, along with the fact that it is known (or hypothesized) that a certain number of higher-order derivatives exist. Assume that we are working about x = 0.
Then, from the theory of Taylor Series:
f(x) = f(0) + f'(0)x + f''(0)(1/2)x^2 + f'''(0)(1/6)x^3+ terms where x has order 4 or more.
So, to approximate f'(0) one does some algebra to obtain
f'(0) = (f(x)-f(0))/x -f''(0)(1/2)x -f'''(0)(1/6)x^2 - terms where x has order 3 or more
This is not a surprise, as we are approximating the derivative by a difference quotient. Note that the error terms have order 1 or more. Call this approximating scheme F(x). Now, suppose we want our error terms to have order 2 or more:
Similarly, with F(x/2) = 2(f(x/2)-f(0))/x, we get f'(0) = F(x/2) - f''(0)(1/4)x - f'''(0)(1/24)x^2 - terms where x has order 3 or more. So now, F(x) - 2F(x/2) = -f'(0) + f'''(0)(1/6 - 1/12)x^2 + terms where x has order 3 or more = -f'(0) + f'''(0)(1/12)x^2 + terms where x has order 3 or more.
Then 2F(x/2)-F(x) = (4f(x/2)-f(x)-3f(0))/x is an approximation scheme whose error in estimating f'(0) is second order (the error shrinks roughly 4 times, rather than 2, when x is halved), at the expense of evaluating f at one additional, closer point.
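Here is a quick numerical check of this in Python. The test function f(x) = e^x (so that f'(0) = 1 exactly) is my choice for illustration, not something from the derivation above:

```python
import math

def f(x):
    return math.exp(x)  # chosen so that f'(0) = 1 exactly

def F(x):
    # first-order scheme: the plain difference quotient (f(x) - f(0)) / x
    return (f(x) - f(0.0)) / x

def F2(x):
    # extrapolated scheme: 2F(x/2) - F(x) = (4f(x/2) - f(x) - 3f(0)) / x
    return (4.0 * f(x / 2) - f(x) - 3.0 * f(0.0)) / x

# Halving x should roughly halve the error of F, but quarter the error of F2.
for x in (0.1, 0.05, 0.025):
    print(x, abs(F(x) - 1.0), abs(F2(x) - 1.0))
```

Running this shows the error of F shrinking by about a factor of 2 per halving of x, while the error of F2 shrinks by about a factor of 4, as the order counting predicts.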
Similar ideas are used in many other approximation schemes, such as those to approximate integrals (e.g., Romberg integration) and those to approximate solutions to differential equations (Runge-Kutta schemes, for example). If you always wondered why those funny weighted averages were the way that they were, it was to raise the order of the error terms (and so shrink the error) as described above.
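The same trick applied once to the composite trapezoid rule gives the first step of Romberg integration. A sketch in Python, using the illustrative integral of e^x on [0, 1] (whose exact value is e - 1); the function names here are mine:

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoid rule with n panels; its error is roughly C*h^2
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def romberg_step(f, a, b, n):
    # one Richardson extrapolation step: (4*T(h/2) - T(h)) / 3.
    # The weights cancel the h^2 error term, leaving an h^4 error.
    return (4.0 * trapezoid(f, a, b, 2 * n) - trapezoid(f, a, b, n)) / 3.0

exact = math.e - 1.0  # integral of e^x over [0, 1]
print(abs(trapezoid(math.exp, 0, 1, 8) - exact))
print(abs(romberg_step(math.exp, 0, 1, 8) - exact))
```

The "funny weights" 4/3 and -1/3 are exactly the ones that cancel the leading error term, which is why the extrapolated value is dramatically more accurate than either trapezoid estimate alone.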
Anyway, I never realized any of that when I previously taught calculus. Those pesky error terms in the series have a practical use!