
Taylor series expansion of cos x in Python

I want to calculate the sum of the cosine series (while keeping x in radians). This is the code I wrote:

import math
def cosine(x,n):
    sum = 0
    for i in range(0, n+1):
        sum += ((-1) ** i) * (x**(2*i)/math.factorial(2*i))
    return sum

and I checked it against math.cos(). It works just fine for small numbers:

print("Result: ", cosine(25, 1000))
print(math.cos(25))

the output:

Result: 0.991203540954667
0.9912028118634736

The numbers are still close. But when I try a bigger number, e.g. 40, it returns a completely different value:

Result: 1.2101433786727471
-0.6669380616522619

Anyone got any idea why this happens?

The error term of a Taylor expansion grows the further you are from the point of expansion (here, x_0 = 0). To reduce the error, exploit the periodicity and symmetry of cosine by only evaluating within the interval [0, 2 * pi):

import math
from math import pi

def cosine(x, n):
    x = x % (2 * pi)          # reduce to [0, 2*pi)
    total = 0
    for i in range(0, n + 1):
        total += ((-1) ** i) * (x**(2*i) / math.factorial(2*i))
    return total

This can be further improved to [0, pi/2] :

def cosine(x, n):
    x = x % (2 * pi)          # reduce to [0, 2*pi)
    if x > pi:
        x = abs(x - 2 * pi)   # reflect into [0, pi]
    if x > pi / 2:
        return -cosine(pi - x, n)   # cos(x) = -cos(pi - x)
    total = 0
    for i in range(0, n + 1):
        total += ((-1) ** i) * (x**(2*i) / math.factorial(2*i))
    return total
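A quick sanity check of the folded version (reproducing the function so the snippet runs on its own): after range reduction, even a small n matches math.cos closely for the problematic input.

```python
import math
from math import pi

def cosine(x, n):
    # fold x into [0, pi/2] using periodicity and symmetry,
    # then evaluate the Taylor series there
    x = x % (2 * pi)
    if x > pi:
        x = abs(x - 2 * pi)
    if x > pi / 2:
        return -cosine(pi - x, n)
    total = 0
    for i in range(0, n + 1):
        total += ((-1) ** i) * (x**(2*i) / math.factorial(2*i))
    return total

# n = 10 now suffices where the unreduced series went wildly wrong
print(cosine(40, 10), math.cos(40))
```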

Contrary to the answer you got, this Taylor series converges regardless of how large the argument is. The factorial in the terms' denominators eventually drives the terms to 0.

But before the factorial portion dominates, the terms can grow enormous in absolute value. Native floating point simply doesn't carry enough bits of precision for the low-order bits of the result to survive the cancellation.
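To see the scale of the problem, look at the largest term of the series for x = 40 (a small illustrative check, not part of the original answer):

```python
import math

# For x = 40 the terms x**(2*i) / (2*i)! peak near 2*i == x,
# long before the factorial takes over.
x = 40
largest = max(x**(2*i) / math.factorial(2*i) for i in range(60))
print(f"largest term: {largest:.3e}")
```

The largest term is on the order of 1e16, while the final sum is at most 1 in magnitude. A 64-bit float keeps only about 16 significant decimal digits, so essentially none of the answer's bits survive the cancellation.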

Here's a way that doesn't lose any bits of precision. It's not practical because it's slow. Trust me when I tell you that it typically takes years of experience to learn how to write practical, fast, high-quality math libraries.

def mycos(x, nbits=100):
    from fractions import Fraction
    x2 = -Fraction(x) ** 2    # exact rational arithmetic: no bits are lost
    i = 0
    ntries = 0
    total = term = Fraction(1)
    while True:
        ntries += 1
        term = term * x2 / ((i + 1) * (i + 2))
        i += 2
        total += term
        # stop once the running total dwarfs the newest term by > nbits bits
        if (total // term).bit_length() > nbits:
            break
    print("converged to >=", nbits, "bits in", ntries, "steps")
    return total

and then your examples:

>>> mycos(25)
converged to >= 100 bits in 60 steps
Fraction(177990265631575526901628601372315751766446600474817729598222950654891626294219622069090604398951917221057277891721367319419730580721270980180746700236766890453804854224688235663001, 179569976498504495450560473003158183053487302118823494306831203428122565348395374375382001784940465248260677204774780370309486592538808596156541689164857386103160689754560975077376)
>>> float(_)
0.9912028118634736
>>> mycos(40)
converged to >= 100 bits in 82 steps
Fraction(-41233919211296161511135381283308676089648279169136751860454289528820133116589076773613997242520904406094665861260732939711116309156993591792484104028113938044669594105655847220120785239949370429999292710446188633097549, 61825710035417531603549955214086485841025011572115538227516711699374454340823156388422475359453342009385198763106309156353690402915353642606997057282914587362557451641312842461463803518046090463931513882368034080863251)
>>> float(_)
-0.6669380616522619

Things to note:

  • The full-precision results require lots of bits.

  • Rounded back to float, they identically match what you got from math.cos().

  • It doesn't require anywhere near 1000 steps to converge.
